
    Motion tracking of iris features to detect small eye movements

    The inability of current video-based eye trackers to reliably detect very small eye movements has led to confusion about the prevalence or even the existence of monocular microsaccades (small, rapid eye movements that occur in only one eye at a time). Because these methods often rely on precisely localizing the pupil and/or corneal reflection on successive frames, current microsaccade-detection algorithms often suffer from signal artifacts and a low signal-to-noise ratio. We describe a new video-based eye-tracking methodology that can reliably detect small eye movements over 0.2 degrees (12 arcmin) with very high confidence. Our method tracks the motion of iris features to estimate velocity rather than position, yielding a better record of microsaccades. We provide a more robust, detailed record of miniature eye movements by relying on more stable, higher-order features (such as local features of iris texture) instead of lower-order features (such as pupil center and corneal reflection), which are sensitive to noise and drift.
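
    The velocity-from-texture idea can be sketched with generic tools: the snippet below (an illustration, not the authors' implementation) tracks corner features inside a hypothetical iris mask with OpenCV's Lucas-Kanade optical flow and converts the median per-frame displacement into an eye velocity. The frame rate and degrees-per-pixel scale are placeholder assumptions.

```python
# Illustrative sketch only: approximate iris-feature velocity tracking with
# Lucas-Kanade optical flow. FPS, DEG_PER_PX, and the iris mask are assumptions.
import cv2
import numpy as np

FPS = 500.0          # assumed camera frame rate
DEG_PER_PX = 0.02    # assumed angular scale of the eye image

def iris_velocity(prev_gray, curr_gray, iris_mask):
    """Median feature displacement between two frames, converted to deg/s."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=5,
                                  mask=iris_mask)
    if pts is None:
        return None
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return None
    disp = (new_pts[good] - pts[good]).reshape(-1, 2)   # pixels per frame
    median_disp = np.median(disp, axis=0)
    return median_disp * DEG_PER_PX * FPS               # degrees per second
```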

    Portable Eyetracking: A Study of Natural Eye Movements

    Visual perception, operating below conscious awareness, effortlessly provides the experience of a rich representation of the environment, continuous in space and time. Conscious visual perception is made possible by the 'foveal compromise,' the combination of the high-acuity fovea and a sophisticated suite of eye movements. Our illusory visual experience cannot be understood by introspection, but monitoring eye movements lets us probe the processes of visual perception. Four tasks representing a wide range of complexity were used to explore visual perception: image quality judgments, map reading, model building, and hand-washing. Very short fixation durations were observed in all tasks, some as short as 33 msec. While some tasks showed little variation in eye movement metrics, differences in eye movement patterns and high-level strategies were observed in the model building and hand-washing tasks. Performance in the hand-washing task revealed a new type of eye movement: 'planful' eye movements were made to objects well in advance of a subject's interaction with the object. Often occurring in the middle of another task, they provide 'overlapping' temporal information about the environment, offering a mechanism that helps produce our conscious visual experience.

    Collecting and Analyzing Eye-Tracking Data in Outdoor Environments

    Natural outdoor conditions pose unique obstacles for researchers, above and beyond those inherent to all mobile eye-tracking research. During analyses of a large set of eye-tracking data collected on geologists examining outdoor scenes, we have found that calibration, pupil identification, fixation detection, and gaze analysis all require procedures different from those typically used for indoor studies. Here, we discuss each of these challenges and present solutions, which together define a general method useful for investigations relying on outdoor eye-tracking data. We also discuss recommendations for improving the tools that are available, to further increase the accuracy and utility of outdoor eye-tracking data.
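
    As one concrete example of the kind of fixation-detection step that must be re-tuned for outdoor, mobile recordings, the sketch below implements a plain velocity-threshold (I-VT) detector; the 30 deg/s threshold and 60 ms minimum duration are illustrative assumptions, not values from the paper.

```python
# Minimal velocity-threshold (I-VT) fixation detector; thresholds are
# illustrative assumptions and would need tuning for outdoor/mobile data.
import numpy as np

def detect_fixations(x_deg, y_deg, t_s, vel_thresh=30.0, min_dur=0.06):
    """Return (start_time, end_time) pairs where gaze speed stays below threshold."""
    vx = np.gradient(x_deg, t_s)
    vy = np.gradient(y_deg, t_s)
    speed = np.hypot(vx, vy)                 # deg/s
    slow = speed < vel_thresh
    fixations, start = [], None
    for i, s in enumerate(slow):
        if s and start is None:
            start = i
        elif not s and start is not None:
            if t_s[i - 1] - t_s[start] >= min_dur:
                fixations.append((t_s[start], t_s[i - 1]))
            start = None
    if start is not None and t_s[-1] - t_s[start] >= min_dur:
        fixations.append((t_s[start], t_s[-1]))
    return fixations
```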

    EllSeg: An Ellipse Segmentation Framework for Robust Gaze Tracking

    Ellipse fitting, an essential component of pupil- or iris-tracking-based video oculography, is performed on previously segmented eye parts generated using various computer vision techniques. Several factors, such as occlusions due to eyelid shape, camera position, or eyelashes, frequently break ellipse-fitting algorithms that rely on well-defined pupil or iris edge segments. In this work, we propose training a convolutional neural network to directly segment entire elliptical structures and demonstrate that such a framework is robust to occlusions and offers superior pupil and iris tracking performance (at least 10% and 24% increases in pupil and iris center detection rates, respectively, within a two-pixel error margin) compared to using standard eye-parts segmentation, across multiple publicly available synthetic segmentation datasets.
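
    For context, the conventional downstream step the paper builds on, fitting an ellipse to a predicted binary pupil mask, can be sketched with OpenCV as below; this is a generic illustration rather than the EllSeg network or its training pipeline.

```python
# Generic ellipse fit on a binary pupil mask (not the EllSeg model itself).
import cv2
import numpy as np

def fit_pupil_ellipse(mask):
    """Fit an ellipse to the largest connected region of a binary mask.

    Returns ((cx, cy), (major, minor), angle_deg) or None if no region is found.
    """
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:          # cv2.fitEllipse needs at least 5 points
        return None
    return cv2.fitEllipse(largest)
```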

    Development of a Virtual Laboratory for the Study of Complex Human Behavior

    The study of human perception has evolved from examining simple tasks executed in reduced laboratory conditions to the examination of complex, real-world behaviors. Virtual environments represent the next evolutionary step by allowing full stimulus control and repeatability for human subjects, and a testbed for evaluating models of human behavior. Visual resolution varies dramatically across the visual field, dropping orders of magnitude from central to peripheral vision. Humans move their gaze about a scene several times every second, projecting task-critical areas of the scene onto the central retina. These eye movements are made even when the immediate task does not require high spatial resolution. Such “attentionally-driven” eye movements are important because they provide an externally observable marker of the way subjects deploy their attention while performing complex, real-world tasks. Tracking subjects’ eye movements while they perform complex tasks in virtual environments provides a window into perception. In addition to the ability to track subjects’ eyes in virtual environments, concurrent EEG recording provides a further indicator of cognitive state. We have developed a virtual reality laboratory in which head-mounted displays (HMDs) are instrumented with infrared video-based eyetrackers to monitor subjects’ eye movements while they perform a range of complex tasks such as driving, and manual tasks requiring careful eye-hand coordination. A go-kart mounted on a 6-DOF motion platform provides kinesthetic feedback to subjects as they drive through a virtual town; a dual-haptic interface consisting of two SensAble Phantom extended-range devices allows free motion and realistic force feedback within a 1 m³ working volume.

    RITnet: Real-time Semantic Segmentation of the Eye for Gaze Tracking

    Accurate eye segmentation can improve eye-gaze estimation and support interactive computing based on visual attention; however, existing eye segmentation methods suffer from issues such as person-dependent accuracy, lack of robustness, and an inability to run in real time. Here, we present the RITnet model, a deep neural network that combines U-Net and DenseNet. RITnet is under 1 MB and achieves 95.3% accuracy on the 2019 OpenEDS Semantic Segmentation challenge. Using a GeForce GTX 1080 Ti, RITnet tracks at over 300 Hz, enabling real-time gaze-tracking applications. This model is the winning submission for the OpenEDS Semantic Segmentation Challenge for eye images (https://research.fb.com/programs/openeds-challenge/) and appears in ICCVW 2019. Pre-trained models and source code are available at https://bitbucket.org/eye-ush/ritnet/.
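
    To illustrate the dense-connectivity idea such a compact model relies on, here is a toy PyTorch dense block of the kind used inside a U-Net-style encoder/decoder; the layer sizes are illustrative assumptions, and the actual RITnet architecture and weights live in the linked repository.

```python
# Toy dense block: each layer's output is concatenated with all previous
# feature maps. Channel counts are illustrative assumptions, not RITnet's.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth=8, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.BatchNorm2d(growth),
                nn.LeakyReLU(inplace=True)))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

# Example: run the block over a single-channel eye image crop.
block = DenseBlock(in_ch=1)
out = block(torch.randn(1, 1, 64, 64))   # shape (1, block.out_channels, 64, 64)
```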

    The role of exocentric reference frames in the perception of visual direction

    One classic piece of evidence for an efference copy signal of eye position is that a small, positive afterimage viewed in darkness is perceived to move with the eye. When a small stationary reference point is visible, the afterimage appears to move relative to the reference point. However, this is true only when the afterimage is localized to a small area. We have observed that when an extended afterimage of a complex scene is generated by a brief, bright flash it does not appear to move, even with large changes in eye position. When subjects were instructed to maintain their direction of gaze, we observed small saccades (typically < 1 deg) and slow drift movements often totalling more than 10 deg over a 30 sec period. When the instructions were to simply inspect the extended afterimage, subjects made larger saccades (up to 5 deg) which were not accompanied by afterimage movement. The smaller movements observed under the first instructions are greater than those observed in the dark or with small afterimages. When a visible reference is present with these large afterimages, the afterimage appears stationary, while the reference point appears to move. Eye position was monitored following the generation of such afterimages. In general, the perceived motion of the stationary reference point was in a direction opposite to the motion of the eye. Similar drift movements of smaller magnitude were observed with localized afterimages, but the motion was attributed to the afterimage.

    Characterization and reconstruction of VOG noise with power spectral density analysis

    Characterizing noise in eye movement data is important for data analysis, as well as for the comparison of research results across systems. We present a method that characterizes and reconstructs the noise in eye movement data from video-oculography (VOG) systems, taking into account the uneven sampling in real recordings due to track loss and inherent system features. The proposed method extends the Lomb-Scargle periodogram, which is used for the estimation of the power spectral density (PSD) of unevenly sampled data [Hocke and Kampfer 2009]. We estimate the PSD of fixational eye movement data and reconstruct the noise by applying a random phase to the inverse Fourier transform so that the reconstructed signal retains the amplitude of the original noise at each frequency. We apply this method to the EMRA/COGAIN Eye Data Quality Standardization project's dataset, which includes recordings from 11 commercially available VOG systems and a Dual Purkinje Image (DPI) eye tracker. The reconstructed noise from each VOG system was superimposed onto the DPI data, and the resulting eye movement measures from the same original behaviors were compared.
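
    A minimal sketch of the general idea, not the authors' extended method: estimate a Lomb-Scargle periodogram from unevenly sampled fixational data with SciPy, then synthesize noise on an even grid whose per-frequency amplitude follows the estimate but whose phases are random. The frequency grid, the amplitude scaling, and the output sampling below are illustrative assumptions.

```python
# Sketch: Lomb-Scargle PSD of unevenly sampled noise, then random-phase
# resynthesis on an even grid. Scaling and frequency range are assumptions.
import numpy as np
from scipy.signal import lombscargle

def reconstruct_noise(t, y, n_out, dt_out, n_freqs=512):
    """Synthesize noise with the estimated amplitude spectrum and random phases."""
    y = y - np.mean(y)
    freqs_hz = np.linspace(1.0 / (n_out * dt_out), 0.5 / dt_out, n_freqs)
    pgram = lombscargle(t, y, 2 * np.pi * freqs_hz)    # power at each frequency
    amps = np.sqrt(4 * pgram / len(t))                 # approximate amplitudes
    phases = np.random.uniform(0, 2 * np.pi, n_freqs)
    t_out = np.arange(n_out) * dt_out
    # Sum of sinusoids: estimated amplitude per frequency, random phase.
    return (amps[:, None] * np.cos(2 * np.pi * freqs_hz[:, None] * t_out
                                   + phases[:, None])).sum(axis=0)
```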